    Mitigation of Sampling Errors in VCO-Based ADCs

    Active feature acquisition for opinion stream classification under drift

    Active stream learning is frequently used to acquire labels for instances, and less frequently to determine which features should be considered as the stream evolves. We introduce a framework for active feature selection, intended to adapt the feature space of a polarity learner over a stream of opinionated documents. We report on the first results of our framework on substreams of reviews from different product categories.
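
    The abstract does not detail the framework's components, so the following is an illustrative-only sketch of the general adaptation step it describes: polarity features are re-ranked over a sliding window of recent labelled reviews, and the learner would then be restricted to the current top-k features. The window size, k, and the chi-squared ranking are assumptions, not elements taken from the paper.

```python
from collections import deque

import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_selection import chi2


class StreamFeatureSelector:
    """Illustrative sketch: re-rank text features on a sliding window of reviews."""

    def __init__(self, k=500, window=2000):
        self.k = k
        self.buffer = deque(maxlen=window)                      # recent (text, label) pairs
        self.vectorizer = HashingVectorizer(n_features=2 ** 16, alternate_sign=False)

    def update(self, texts, labels):
        """Add newly arrived labelled reviews and return the current top-k feature indices."""
        self.buffer.extend(zip(texts, labels))
        docs, ys = zip(*self.buffer)
        X = self.vectorizer.transform(docs)
        scores, _ = chi2(X, ys)                                 # re-rank features on the window
        return np.argsort(np.nan_to_num(scores))[-self.k:]      # indices the polarity learner keeps
```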

    Active feature acquisition on data streams under feature drift

    Traditional active learning tries to identify instances for which acquiring the label increases model performance under budget constraints. Less research has been devoted to actively acquiring feature values, where both the instance and the feature must be selected intelligently, and even less to a scenario where the instances arrive in a stream with feature drift. We propose an active feature acquisition strategy for data streams with feature drift, as well as an active feature acquisition evaluation framework. We also implement a baseline that chooses features randomly and compare this random approach against eight different methods in a scenario where at most one feature can be acquired per instance and all features are considered to cost the same. Our initial experiments on 9 different data sets, with 7 different degrees of missing features and 8 different budgets, show that our developed methods outperform the random acquisition on 7 data sets and perform comparably on the remaining two. © 2020, The Author(s).
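
    As a rough illustration of the setting described above (not the authors' acquisition strategies), the sketch below runs the random baseline: for each arriving instance, at most one missing feature value may be bought, all features cost the same, and spending is capped by a budget. The acquire_feature_value oracle and the incremental classifier are hypothetical placeholders.

```python
import random

import numpy as np
from sklearn.linear_model import SGDClassifier


def acquire_feature_value(feature_index):
    # Hypothetical oracle standing in for buying the true feature value.
    raise NotImplementedError


def random_acquisition_stream(stream, budget, seed=0):
    """stream yields (x, y) pairs where x is a 1-D float array with np.nan
    marking missing feature values; budget caps the number of acquisitions."""
    rng = random.Random(seed)
    model = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])                    # assumed binary labels
    spent = 0
    for x, y in stream:
        missing = np.flatnonzero(np.isnan(x))
        if missing.size and spent < budget:
            j = int(rng.choice(list(missing)))    # random baseline: pick any missing feature
            x[j] = acquire_feature_value(j)
            spent += 1
        x_imputed = np.nan_to_num(x)              # zero-impute still-missing values
        model.partial_fit(x_imputed.reshape(1, -1), [y], classes=classes)
    return model
```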

    Resources For Evidence-Based Health Care: Accessibility And Availability

    Evidence-Based Practice (EBP) is a problem-solving approach to clinical care that incorporates the conscientious use of current best evidence from well-designed studies, the clinician's expertise, and patient values and preferences (Melnyk & Fineout-Overholt, 2005; Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000). It is important to see clinical expertise as the ability to integrate research evidence and patients' circumstances and preferences to help patients arrive at optimal decisions (Guyatt, Cook, & Haynes, 2004). Research has shown that patient outcomes are 28% better when clinical care is based upon evidence rather than clinical practice steeped in tradition (Heater, Becker, & Olsen, 1998). The process of EBP minimizes the translation time needed to incorporate research findings into practice and clarifies the differences between ritualistic practice, habitual approaches, personal preferences, anecdotal experiences, empirical data, and statistical significance to support nursing practice (Alspach, 2006). The availability of evidence-based practice tools and methods helps in faster identification of the best available evidence, so that care can be provided at the point where it matters most. Implementing EBP in health care is complex and challenging. One of its main components is retrieving evidence from different sources. The information explosion, with thousands of health-related articles and research papers published every year, has created a need to expand the knowledge base for providing evidence-based health care worldwide. Retrieving evidence from various sources may be difficult for several reasons: health professionals may lack the time to find the best available evidence (Ervin, 2002) or the knowledge to search for it effectively (Sitzia, 2002), and it can even be difficult to find authentic sources of evidence.

    Resource management for model learning at entity level

    Many current and future applications plan to provide entity-specific predictions. These range from individualized healthcare applications to user-specific purchase recommendations. In our previous stream-based work on Amazon review data, we showed that error-weighted ensembles that combine entity-centric classifiers, which are only trained on reviews of one particular product (entity), and entity-ignorant classifiers, which are trained on all reviews irrespective of the product, can improve prediction quality. This came at the cost of storing multiple entity-centric models in primary memory, many of which would never be used again because their entities would not receive future instances in the stream. To overcome this drawback and make entity-centric learning viable in these scenarios, we investigated two different methods of reducing the primary memory requirement of our entity-centric approach. Our first method uses the lossy counting algorithm for data streams to identify entities whose instances make up a certain percentage of the total data stream within an error margin. We then store all models that do not fulfil this requirement in secondary memory, from which they can be retrieved if future instances belonging to them arrive later in the stream. The second method replaces entity-centric models with a much more naive model that only stores the past labels and predicts the majority label seen so far. We applied our methods to the previously used Amazon data sets, which contained up to 1.4M reviews, and added two subsets of the Yelp data set, which contain up to 4.2M reviews. Both methods were successful in reducing the primary memory requirements while still outperforming an entity-ignorant model. © 2020, The Author(s).
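
    The lossy counting step mentioned above can be sketched as follows. The class below implements the standard lossy counting algorithm over entity identifiers; the eviction hook, where entity-centric models would be moved to secondary memory, is an illustrative assumption rather than the paper's exact bookkeeping.

```python
import math


class LossyEntityCounter:
    """Lossy counting over entity ids: keeps counters only for entities that
    plausibly exceed a support threshold `support` within error margin `epsilon`."""

    def __init__(self, support=0.01, epsilon=0.001):
        self.support, self.epsilon = support, epsilon
        self.counts = {}                              # entity_id -> (count, max undercount)
        self.n = 0                                    # observations seen so far
        self.bucket_width = math.ceil(1.0 / epsilon)

    def add(self, entity_id):
        """Register one observation; return entities evicted at a bucket boundary."""
        self.n += 1
        bucket = math.ceil(self.n / self.bucket_width)
        count, delta = self.counts.get(entity_id, (0, bucket - 1))
        self.counts[entity_id] = (count + 1, delta)
        if self.n % self.bucket_width == 0:           # end of bucket: prune rare entities
            evicted = [e for e, (c, d) in self.counts.items() if c + d <= bucket]
            for e in evicted:
                del self.counts[e]
            return evicted                            # their models could go to secondary memory
        return []

    def frequent_entities(self):
        """Entities whose frequency is at least (support - epsilon) of the stream."""
        threshold = (self.support - self.epsilon) * self.n
        return [e for e, (c, _) in self.counts.items() if c >= threshold]
```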

    Entity-level stream classification: exploiting entity similarity to label the future observations referring to an entity

    Stream classification algorithms traditionally treat arriving instances as independent. However, in many applications, the arriving examples may depend on the “entity” that generated them, e.g. in product reviews or in the interactions of users with an application server. In this study, we investigate the potential of exploiting this dependency by partitioning the original stream of instances/“observations” into entity-centric substreams and by incorporating entity-specific information into the learning model. We propose a k-nearest-neighbour-inspired stream classification approach, in which the label of an arriving observation is predicted by exploiting knowledge of the observations belonging to this entity and to entities similar to it. For the computation of entity similarity, we consider knowledge about the observations and knowledge about the entity, potentially from a domain/feature space different from that in which predictions are made. To distinguish between cases where this knowledge transfer is beneficial for stream classification and cases where the knowledge on the entities does not contribute to classifying the observations, we also propose a heuristic approach based on random sampling of substreams using k Random Entities (kRE). Our learning scenario is not fully supervised: after acquiring labels for the initial m observations of each entity, we assume that no additional labels arrive and attempt to predict the labels of near-future and far-future observations from that initial seed. We report on our findings from three datasets.
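
    A minimal sketch of the kind of entity-level prediction described above (not the authors' exact method): the label of a new observation of an entity is taken as a majority vote over the labelled seed observations of that entity and of its k most similar entities, with cosine similarity over per-entity profile vectors standing in for the paper's entity-similarity computation.

```python
import numpy as np


def cosine(u, v):
    """Cosine similarity between two profile vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)


def predict_entity_level(entity_id, entity_profiles, seed_labels, k=3):
    """entity_profiles: dict entity_id -> profile vector (entity knowledge);
    seed_labels: dict entity_id -> labels of its first m observations."""
    target = entity_profiles[entity_id]
    neighbours = sorted(
        (e for e in entity_profiles if e != entity_id),
        key=lambda e: cosine(entity_profiles[e], target),
        reverse=True,
    )[:k]
    pool = list(seed_labels.get(entity_id, []))       # own seed labels
    for e in neighbours:
        pool.extend(seed_labels.get(e, []))           # seed labels of similar entities
    # majority vote over the pooled seed labels
    return max(set(pool), key=pool.count) if pool else None
```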

    A 0.9-Nyquist-Band Digital Timing Mismatch Correction for Time-Interleaved ADCs Achieving Delay Tuning Range of 0.12-Sample-Period

    Time-interleaved analog-to-digital converters (TI-ADCs) require channel matching in terms of offset, gain, and sampling clock skew to achieve the best data conversion performance. Conventionally, correction of skew mismatch is realized with analog delay lines, making it challenging for high-speed ADC designs to achieve fine delay resolution over a wide tuning range while maintaining low clock jitter. Digital skew correction allows greater flexibility than analog solutions but is hindered by a significant hardware footprint. This paper demonstrates a digital filter-based timing skew correction approach suitable for on-chip implementation. In a 10-bit, 8-channel TI-ADC, the proposed structure corrects mismatch magnitudes of up to 0.12 of the sample period across 0.9 of the Nyquist band while requiring only 65% of the hardware of comparable architectures with equivalent performance. The presented digital circuit uses reduced combinational paths and operates at the clock rate of a single ADC channel, making it applicable to digitally assisted high-speed TI-ADCs.
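
    The paper's filter structure is not reproduced here, but the general idea of digital skew correction can be illustrated with a per-channel fractional-delay FIR filter: each sub-ADC's samples are digitally shifted by its estimated timing error, expressed as a fraction of the sample period. The windowed-sinc design and tap count below are assumptions, not the paper's implementation.

```python
import numpy as np


def fractional_delay_fir(delay, n_taps=21):
    """Windowed-sinc FIR adding a fractional delay of `delay` samples (|delay| < 1)
    on top of the filter's nominal (n_taps - 1) / 2 group delay."""
    n = np.arange(n_taps) - (n_taps - 1) / 2
    h = np.sinc(n - delay) * np.hamming(n_taps)
    return h / h.sum()                                # normalise DC gain to 1


def correct_channel(samples, skew_fraction):
    """Apply the fractional-delay filter to one TI-ADC channel's sample sequence."""
    h = fractional_delay_fir(skew_fraction)
    return np.convolve(samples, h, mode="same")
```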

    Design of Cyclic-Coupled Ring Oscillators with Guaranteed Maximal Phase Resolution

    Cyclic-coupled ring oscillators (CCROs), which consist of M ring oscillators each with N inverting stages, can be used in time-domain data converters to achieve sub-gate-delay resolution and improved phase noise performance compared to a single ring oscillator (RO). However, CCROs can oscillate in several different oscillation modes, some of which contain overlapping phases. Such in-phase oscillations severely degrade the performance of a time-domain data converter by undermining the sub-gate-delay resolution of the CCRO. This paper presents a design method to avoid the undesired in-phase oscillation modes, and thus achieve guaranteed maximal phase resolution regardless of the oscillation mode, by properly selecting the CCRO dimensions N and M. We show, both theoretically and with transistor-level simulations, that mode-agnostic maximum phase resolution can be ensured by selecting a prime M together with an N that is coprime with M.
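
    The stated design rule (a prime number of rings M and a stage count N coprime with M) is easy to check programmatically; the helper below enumerates valid (N, M) pairs, with the example parameter ranges chosen arbitrarily rather than taken from the paper.

```python
from math import gcd


def is_prime(m):
    """True if m is a prime number."""
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))


def valid_ccro_dimensions(n_range, m_range):
    """Enumerate (N, M) pairs satisfying the mode-agnostic resolution rule:
    M prime and gcd(N, M) = 1."""
    return [(n, m) for n in n_range for m in m_range
            if is_prime(m) and gcd(n, m) == 1]


# Example: odd stage counts N (inverting rings) and small ring counts M.
print(valid_ccro_dimensions(range(3, 10, 2), range(2, 8)))
```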

    Low-Complexity Feedback Data Compression for Closed-Loop Digital Predistortion

    This paper proposes sample combining as a low-complexity and effective feedback data compression technique that significantly reduces the computational effort and buffering needs for parameter adaptation in a closed-loop digital predistortion (DPD) system. Compression is achieved by applying an integrate & dump operation to an undersampled feedback signal. The proposed method is experimentally validated for RF-measurement-based behavioral modeling as well as closed-loop DPD of a 3.5 GHz GaN Doherty PA, also taking quantization effects of the feedback path into account. Our results demonstrate that the proposed technique is as capable as state-of-the-art histogram-based sample selection, but at a much lower complexity.
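
    A rough sketch of the sample-combining idea, assuming a simple block-averaging interpretation of the integrate & dump operation: the feedback capture is first undersampled, then non-overlapping blocks are averaged, shrinking the data the DPD adaptation has to buffer and process. The decimation factor and block length are placeholders, not values from the paper.

```python
import numpy as np


def integrate_and_dump(signal, block_len):
    """Average non-overlapping blocks of `block_len` samples of a 1-D array."""
    usable = (len(signal) // block_len) * block_len
    return signal[:usable].reshape(-1, block_len).mean(axis=1)


def compress_feedback(feedback, undersample=4, block_len=8):
    """Undersample the feedback capture, then integrate & dump the result."""
    return integrate_and_dump(feedback[::undersample], block_len)
```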

    Fully Digital On-Chip Wideband Background Calibration for Channel Mismatches in Time-Interleaved Time-Based ADCs

    This letter presents a fully integrated on-chip digital mismatch compensation system for time-based time-interleaved (TI) data converters. The proposed digital compensation features blind calibration of gain, offset, and timing mismatches. The implemented system uses time-based sampling clock mismatch detection, achieving convergence within 32K samples, which is on par with analog-assisted background methods. A specialized filter structure compensates for timing mismatches of magnitude up to 0.21 of the sampling period, nearly triple the range of other published digital compensation methods, and is effective for input signals up to 0.92 of the Nyquist bandwidth. The on-chip digital correction suppresses all mismatch tones to levels below −60 dBc while running fully in the background. The operation is demonstrated with an 8× TI 2-GS/s analog-to-digital converter (ADC) prototype chip implemented in a 28-nm CMOS process.
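
    The calibration engine itself is on-chip hardware, but a simplified software illustration of blind offset and gain equalisation for an M-channel TI-ADC is given below (the timing-mismatch detection and correction, which are the letter's main contribution, are omitted). Per-channel mean and RMS statistics are driven towards the cross-channel average using only the output samples, i.e. without a reference input.

```python
import numpy as np


def blind_offset_gain_correction(samples, n_channels=8):
    """samples: 1-D interleaved TI-ADC output; returns offset/gain-equalised samples."""
    usable = (len(samples) // n_channels) * n_channels
    x = samples[:usable].reshape(-1, n_channels)   # rows: frames, columns: sub-ADC channels
    offsets = x.mean(axis=0)                       # per-channel DC estimates
    x = x - offsets
    gains = np.sqrt((x ** 2).mean(axis=0))         # per-channel RMS estimates
    x = x * (gains.mean() / gains)                 # equalise towards the common RMS
    return x.reshape(-1)                           # restore interleaved order
```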